COVID-19 is a disease caused by the novel coronavirus (SARS-CoV-2), which first emerged in Wuhan, China, in late December 2019. The virus soon spread worldwide and was declared a pandemic by the World Health Organization in March 2020. This brought many changes around the world and in the United States, including a shift in education toward online learning. In this paper, we seek to understand how the COVID-19 pandemic and the increase in online learning affected the emotional well-being of college students. We use several machine learning and statistical models to analyze data collected by the Faculty of Public Administration at the University of Ljubljana, Slovenia, in conjunction with international universities, other higher education institutions, and student associations. Our results indicate that features related to students' academic life had the greatest impact on their emotional well-being. Other important factors included students' satisfaction with their university's and government's handling of the pandemic, as well as students' financial security.
For the past few years, the coronavirus, commonly known as COVID-19, has disrupted the daily lives of all citizens residing in the United States. In response to the growing fear and danger that COVID-19 poses to American society, vaccines have become a standing remedy for individuals to utilize. In this paper, we study the relationship between COVID-19 vaccinations and boosters and the total case counts of the coronavirus across multiple states in the United States. In addition, this paper discusses the relationships between several underlying health conditions and COVID-19. To effectively examine these relationships, this paper uses statistical tests and machine learning methods for analysis and discussion. Furthermore, this paper reflects on conclusions about the relationships among educational attainment, race, and COVID-19, as well as the possible connections that can be established among underlying health conditions, vaccination rates, and COVID-19 total case and death counts.
In this paper, we analyze some effects of the COVID-19 pandemic on healthcare workers. We focus on changes in alcohol consumption habits, using mental health survey data obtained from the Inter-university Consortium for Political and Social Research at the University of Michigan. We use supervised and unsupervised machine learning methods and models, such as decision trees, logistic regression, naive Bayes classifiers, k-nearest neighbors, support vector machines, multilayer perceptrons, random forests, XGBoost, CatBoost, LightGBM, synthetic minority oversampling, and chi-squared tests with mutual information methods, to understand the relationship between COVID-19-related negative effects and changes in healthcare workers' alcohol consumption. Our findings suggest that some effects of the COVID-19 pandemic, such as school closures, work schedule changes, and exposure to COVID-related news, may have led to increased alcohol use.
In late December 2019, the novel coronavirus (SARS-CoV-2) and the resulting disease, COVID-19, were first identified in Wuhan, China. The disease slipped through containment measures, with the first known case in the United States identified on January 20, 2020. In this paper, we use survey data from the Inter-university Consortium for Political and Social Research and apply several statistical and machine learning models and techniques, such as decision trees, multinomial logistic regression, naive Bayes, k-nearest neighbors, support vector machines, neural networks, random forests, gradient tree boosting, XGBoost, CatBoost, LightGBM, synthetic minority oversampling, and chi-squared tests, to analyze the impact of the COVID-19 pandemic on the mental health of frontline workers in the United States. Through interpretation of the many models applied to the mental health survey data, we conclude that the most important factor in predicting mental health decline among frontline workers is the healthcare role the individual holds (nurse, emergency room staff, surgeon, etc.), followed by the amount of sleep the individual got in the past week, the amount of COVID-19-related news consumed per day, the worker's age, and the average amounts of alcohol and cannabis consumed.
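The chi-squared test mentioned in this and the preceding abstract is a standard way to check whether two categorical survey variables are associated. The sketch below is illustrative only (the counts are made up, not from the paper's data): it computes the Pearson chi-squared statistic for a hypothetical 2x2 table of healthcare role versus reported mental health decline.

```python
# Toy sketch: Pearson chi-squared test of independence on a hypothetical
# 2x2 contingency table (healthcare role vs. mental health decline).
def chi_squared_statistic(table):
    """Pearson chi-squared statistic for a 2D contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            expected = r * c / total          # expected count under independence
            observed = table[i][j]
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = [nurses, other staff], cols = [decline, no decline]
table = [[40, 60], [20, 80]]
stat = chi_squared_statistic(table)
# df = (2-1)*(2-1) = 1; the 5% critical value is 3.841, so here the
# association would be judged significant.
print(round(stat, 3), stat > 3.841)
```

In practice one would use `scipy.stats.chi2_contingency`, which also returns the p-value and degrees of freedom.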
Differentiable Search Indices (DSIs) encode a corpus of documents in the parameters of a model and use the same model to map queries directly to relevant document identifiers. Despite the strong performance of DSI models, deploying them in situations where the corpus changes over time is computationally expensive because reindexing the corpus requires re-training the model. In this work, we introduce DSI++, a continual learning challenge for DSI to incrementally index new documents while being able to answer queries related to both previously and newly indexed documents. Across different model scales and document identifier representations, we show that continual indexing of new documents leads to considerable forgetting of previously indexed documents. We also hypothesize and verify that the model experiences forgetting events during training, leading to unstable learning. To mitigate these issues, we investigate two approaches. The first focuses on modifying the training dynamics. Flatter minima implicitly alleviate forgetting, so we optimize for flatter loss basins and show that the model stably memorizes more documents (+12\%). Next, we introduce a generative memory to sample pseudo-queries for documents and supplement them during continual indexing to prevent forgetting for the retrieval task. Extensive experiments on novel continual indexing benchmarks based on Natural Questions (NQ) and MS MARCO demonstrate that our proposed solution mitigates forgetting by a significant margin. Concretely, it improves the average Hits@10 by $+21.1\%$ over competitive baselines for NQ and requires $6$ times fewer model updates compared to re-training the DSI model for incrementally indexing five corpora in a sequence.
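The generative-memory idea in the abstract can be sketched at a high level: while indexing a new corpus, the trainer interleaves pseudo-queries sampled for already-indexed documents so the model keeps seeing (query, docid) pairs for old documents. The names and batch structure below are illustrative, not the DSI++ implementation.

```python
import random

# Minimal sketch of generative-memory replay during continual indexing
# (illustrative only): mix pseudo-queries for old documents into the
# stream of (query, docid) pairs for the newly indexed corpus.
def make_replay_batches(new_pairs, old_docids, pseudo_query_fn,
                        replay_ratio=0.5, seed=0):
    rng = random.Random(seed)
    batches = []
    for query, docid in new_pairs:
        batch = [(query, docid)]                       # new-corpus training pair
        if old_docids and rng.random() < replay_ratio:
            old_id = rng.choice(old_docids)
            batch.append((pseudo_query_fn(old_id), old_id))  # replayed pair
        batches.append(batch)
    return batches

new_pairs = [("q_new_1", "doc_101"), ("q_new_2", "doc_102")]
old_ids = ["doc_1", "doc_2"]
batches = make_replay_batches(new_pairs, old_ids,
                              lambda d: f"pseudo query for {d}")
print(len(batches))
```

In DSI++ the pseudo-queries come from a trained query generator rather than a fixed function; the replay structure is the same.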
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative approach to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty, which we call UDnet. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training using a ground truth image. The proposed model does not need manual human annotation and can learn with a limited amount of data and achieves state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly-available datasets. The results show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics. Code available at https://github.com/alzayats/UDnet .
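The statistically guided stretch described above can be illustrated with a simplified single-channel version (UDnet applies it across multiple colour spaces, and the details here are an assumption for illustration): clip each channel to a band around its mean and rescale to [0, 1], which expands the contrast of underwater images without a ground-truth reference.

```python
# Simplified sketch of a statistically guided per-channel stretch
# (illustrative; UDnet's module operates over multiple colour spaces):
# clip values to mean +/- k*std, then linearly rescale to [0, 1].
def channel_stretch(values, k=2.0):
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    lo, hi = mean - k * std, mean + k * std
    if hi == lo:                       # flat channel: nothing to stretch
        return [0.0] * n
    clipped = [min(max(v, lo), hi) for v in values]
    return [(v - lo) / (hi - lo) for v in clipped]

# A low-contrast channel with one bright outlier; the stretch spreads the
# dark values over more of the [0, 1] range.
channel = [0.2, 0.25, 0.3, 0.35, 0.9]
stretched = channel_stretch(channel)
print(all(0.0 <= v <= 1.0 for v in stretched))
```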
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
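Patch-based training, the most commonly reported strategy for oversized samples (69%), amounts to tiling each large image into fixed-size crops and training on those. A minimal sketch of the tiling step (the function and parameters are illustrative):

```python
# Sketch of patch-based handling for images too large to process at once:
# enumerate the top-left corner of every full patch in a regular grid.
def extract_patches(height, width, patch, stride):
    """Return (row, col) coordinates of every full patch."""
    coords = []
    for r in range(0, height - patch + 1, stride):
        for c in range(0, width - patch + 1, stride):
            coords.append((r, c))
    return coords

# A 512x512 image tiled into non-overlapping 128x128 patches -> 4*4 = 16
coords = extract_patches(512, 512, 128, 128)
print(len(coords))
```

Setting `stride` smaller than `patch` gives overlapping patches, a common variant when predictions are stitched back together.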
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models, respectively, significantly outperform their dense counterparts on SuperGLUE and ImageNet, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
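The core of sparse upcycling is an initialization step: each expert in the new Mixture-of-Experts layer starts as a copy of the dense checkpoint's feed-forward weights, so the sunk pretraining cost is reused. A toy sketch of that step (illustrative, not the authors' code; real implementations copy tensors inside a T5/ViT checkpoint):

```python
import copy

# Minimal sketch of sparse upcycling: initialize every expert of an MoE
# layer from the dense FFN weights of an existing checkpoint.
def upcycle_ffn(dense_ffn_weights, num_experts):
    """dense_ffn_weights: any nested structure of weights for one FFN layer."""
    return [copy.deepcopy(dense_ffn_weights) for _ in range(num_experts)]

dense = {"w_in": [[0.1, 0.2], [0.3, 0.4]], "w_out": [[0.5], [0.6]]}
experts = upcycle_ffn(dense, num_experts=4)
# Each expert starts identical to the dense layer but is an independent copy,
# so the experts can diverge during further training.
print(len(experts), experts[0] == dense, experts[0] is not dense)
```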
Chromosome analysis is essential for diagnosing genetic disorders. For hematologic malignancies, identification of somatic clonal aberrations by karyotype analysis remains the standard of care. However, karyotyping is costly and time-consuming because of the largely manual process and the expertise required in identifying and annotating aberrations. Efforts to automate karyotype analysis to date fell short in aberration detection. Using a training set of ~10k patient specimens and ~50k karyograms from over 5 years from the Fred Hutchinson Cancer Center, we created a labeled set of images representing individual chromosomes. These individual chromosomes were used to train and assess deep learning models for classifying the 24 human chromosomes and identifying chromosomal aberrations. The top-accuracy models utilized the recently introduced Topological Vision Transformers (TopViTs) with 2-level-block-Toeplitz masking, to incorporate structural inductive bias. TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome identification, and exhibited accuracies >99% for aberration detection in most aberrations. Notably, we were able to show high-quality performance even in "few shot" learning scenarios. Incorporating the definition of clonality substantially improved both precision and recall (sensitivity). When applied to "zero shot" scenarios, the model captured aberrations without training, with perfect precision at >50% recall. Together these results show that modern deep learning models can approach expert-level performance for chromosome aberration detection. To our knowledge, this is the first study demonstrating the downstream effectiveness of TopViTs. These results open up exciting opportunities for not only expediting patient results but providing a scalable technology for early screening of low-abundance chromosomal lesions.
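The 2-level block-Toeplitz masking mentioned above constrains the attention bias between two patches of a 2D grid to depend only on their relative (row, column) offset. The sketch below is a simplified, assumed reading of that structure (TopViT learns the per-offset parameters; here a fixed function stands in):

```python
# Illustrative sketch of a 2-level block-Toeplitz attention bias on a 2D
# patch grid: the bias between patches i and j depends only on their
# relative offset (dr, dc), giving a translation-invariant inductive bias.
def block_toeplitz_bias(grid_h, grid_w, bias_fn):
    n = grid_h * grid_w
    bias = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dr = i // grid_w - j // grid_w   # relative row offset
            dc = i % grid_w - j % grid_w     # relative column offset
            bias[i][j] = bias_fn(dr, dc)
    return bias

# e.g. a distance-decay bias; entries sharing a relative offset are equal
B = block_toeplitz_bias(2, 2, lambda dr, dc: -(abs(dr) + abs(dc)))
print(B[0][3], B[1][2])
```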
Multi-objective feature selection is one of the most significant issues in the field of pattern recognition. It is challenging because it maximizes the classification performance and, at the same time, minimizes the number of selected features, and the mentioned two objectives are usually conflicting. To achieve a better Pareto optimal solution, metaheuristic optimization methods are widely used in many studies. However, the main drawback is the exploration of a large search space. Another problem with multi-objective feature selection approaches is the interaction between features. Selecting correlated features has negative effect on classification performance. To tackle these problems, we present a novel multi-objective feature selection method that has several advantages. Firstly, it considers the interaction between features using an advanced probability scheme. Secondly, it is based on the Pareto Archived Evolution Strategy (PAES) method that has several advantages such as simplicity and its speed in exploring the solution space. However, we improve the structure of PAES in such a way that generates the offsprings, intelligently. Thus, the proposed method utilizes the introduced probability scheme to produce more promising offsprings. Finally, it is equipped with a novel strategy that guides it to find the optimum number of features through the process of evolution. The experimental results show a significant improvement in finding the optimal Pareto front compared to state-of-the-art methods on different real-world datasets.
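The Pareto-archive step at the heart of PAES-style methods is easy to state concretely: a candidate enters the archive only if no archived solution dominates it, and it evicts any archived solutions it dominates. A toy sketch with (error, number-of-features) objectives, both minimized (illustrative, not the paper's implementation):

```python
# Toy sketch of Pareto dominance and archive maintenance as used in
# PAES-style multi-objective feature selection. Objectives are tuples
# such as (classification_error, num_features), both to be minimized.
def dominates(a, b):
    """a dominates b if it is no worse on all objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(a, candidate) for a in archive):
        return archive                      # rejected: candidate is dominated
    # accept candidate and drop anything it dominates
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for sol in [(0.20, 10), (0.15, 12), (0.20, 15), (0.10, 20)]:
    archive = update_archive(archive, sol)
# (0.20, 15) is dominated by (0.20, 10); the other three are Pareto-optimal
print(sorted(archive))
```

A full PAES additionally bounds the archive size with a crowding/grid scheme; the dominance logic above is the invariant the archive maintains.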